
feat(ui): add interactive REPL with cancellation and OAuth support #75

Open

yamaceay wants to merge 1 commit into SafeRL-Lab:main from yamaceay:interactive-repl

Conversation

@yamaceay

  • Add interactive REPL with prompt toolkit for rich terminal UI
  • Implement agent cancellation via ESC key during execution
  • Add OAuth token acquisition and caching for MCP HTTP transport
  • Integrate MCP servers during bootstrap phase
  • Add context compaction progress spinner feedback
  • Implement shared state coordination between input and agent threads
  • Add .env file to .gitignore for local configuration

This introduces a new interactive mode that allows users to cancel long-running agent operations and provides better visual feedback during execution. The OAuth flow enables authenticated access to MCP servers over HTTP/SSE transports.
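The cancellation and thread-coordination flow this describes can be sketched with two `threading.Event` flags. This is a minimal illustration with hypothetical names (`cancel_event`, `agent_running`, `start_agent`), not the PR's actual code, which wires this through prompt_toolkit key bindings and internal state:

```python
import threading

cancel_event = threading.Event()   # set by the ESC key binding
agent_running = threading.Event()  # guards against spawning two agent threads

def run_agent(steps) -> None:
    """Body of the background agent thread: run steps until cancelled."""
    try:
        for step in steps:
            if cancel_event.is_set():
                break  # cooperative cancellation point between steps
            step()
    finally:
        agent_running.clear()  # always clear, even if a step raises

def start_agent(steps):
    """Spawn the agent thread, refusing a second one while busy."""
    if agent_running.is_set():
        return None
    agent_running.set()
    cancel_event.clear()
    t = threading.Thread(target=run_agent, args=(steps,), daemon=True)
    t.start()
    return t
```

The key design point is that the flag is cleared in a `finally` on the worker thread itself, so an exception inside a step cannot leave the REPL believing an agent is still running.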

…t, and llmwiki memory plugin

    - Add interactive REPL with prompt toolkit for rich terminal UI
    - Implement agent cancellation via ESC key during execution
    - Add OAuth token acquisition and caching for MCP HTTP transport
    - Integrate MCP servers during bootstrap phase
    - Add context compaction progress spinner feedback
    - Implement shared state coordination between input and agent threads
    - Add llmwiki-plugin example (WikiRead/Write/Append/Search/List/Status tools)
    - Add docs/guides/llmwiki.md with step-by-step setup tutorial
    - Fix tilde expansion in /plugin install for local paths (plugin/store.py)
    - Add .env and .llmwiki.yaml to .gitignore
@chauncygu
Contributor

Thanks for the substantial work here: the threaded REPL, OAuth, and llmwiki plugin are all useful directions. But this PR bundles 6 unrelated features (~1300 LoC) into one commit, and CI is red on 3.11. I'd like to split it before merging. A few items below are blockers regardless.

Blockers

  1. tools/__init__.py rename is wrong. The cc_mcp.tools → mcp.tools rename points at a module that doesn't exist (the official mcp pip package has no tools submodule). The import silently fails; MCP tools only register because bootstrap.py adds an explicit import cc_mcp.tools. Please revert the rename or delete the dead entry.

  2. Background-thread REPL has correctness gaps.

    • run_query is recursively re-entered (line 946). The new outer wrapper that sets and clears _agent_running clears the flag while the outer call is still running, so the main thread then spawns a second thread.
    • The old try/except KeyboardInterrupt around run_query is gone. Daemon-thread exceptions now die silently with no user feedback.
    • Escape calls event.current_buffer.reset(), dropping whatever the user was typing. Should only set _cancel_event.
  3. AskUserQuestion routing races with the type-ahead queue. If the user has already pressed Enter to queue a message before the question fires, the Enter binding's _pending_question check eats that text as the answer. Also, wait() timeout returns the string "(timeout)" which the agent will treat as a real answer.

  4. Non-verbose tool output regression. print_tool_start/end now overwrites with \r and erases on completion. In default mode the user can no longer see which tools the agent ran (only Edit/Write diffs and errors survive). This is a visible UX downgrade — please keep tool traces visible by default or gate behind a config flag.

  5. OAuth security:

    • ~/.cheetahclaws/mcp_oauth_tokens.json is written without chmod 0600 — readable by other users on shared machines.
    • state is generated but never validated in _CallbackHandler.do_GET (CSRF; PKCE is not a substitute).
    • Hardcoded REDIRECT_PORT = 54321 with no fallback if the port is taken.

Smaller things

  • bootstrap.py comment numbering jumps 3 → 5.
  • ui/input.py imports prompt_toolkit.output.vt100._get_size (private API) — will break on PT upgrades.
  • stream_text no longer prints chunks in the non-Live path; output is buffered until flush_response(). SSH / macOS Terminal users lose token-by-token streaming.
  • MCPManager.call_tool replaced an O(1) dict lookup with an O(N) sanitize loop — please cache a sanitized → client map.
  • commands/advanced.py removed the [:60] truncation on tool descriptions; long MCP tool descriptions will wrap awkwardly in /mcp.
  • _load_env doesn't strip quotes (KEY="value" will include the quotes), but the docs claim quotes are fine.

Suggested split

  1. Low-risk batch — can land first: .env loader, ANTHROPIC_BASE_URL, MCP env-var expansion, Path.expanduser() in plugin store, llmwiki plugin + docs.
  2. OAuth — separate PR, after fixing token file mode, state validation, port fallback.
  3. Threaded REPL + /btw + ESC cancel + AskUserQuestion rework — separate PR with tests, with the recursion / KeyboardInterrupt / type-ahead-race issues fixed and CI green.

Happy to re-review once it's split. Thanks again!
